Andi – New Search Engine Powered by AI and Natural Language Processing

There are a lot of acronyms out there that not everyone will recognize, but if you have any interest in the digital world and/or work in digital marketing you are going to know what a SERP is. That acronym of course stands for search engine results page, and it is well understood that landing on the first page of any search engine’s SERPs is pretty darn important for the visibility of your company or whatever interest has you online with your website.

But what if the basis of all that were turned on its head, and what the search engine presented to you based on your query wasn’t a simple list of matching web pages? That’s the way a new search engine called Andi works, and what makes it different is that it goes further than simply understanding the question itself. It uses artificial intelligence and natural language processing to understand the question and make educated guesses as to its intent.

The one criterion is that for best results the search terms need to be as detailed as possible, but there are plenty of times when even the longest and most detailed search terms don’t bring back anything more appropriate than a more basic version would. This is something that gets our attention fast, the same way it would for any Canadian web hosting provider, since we naturally make it so that websites are on the web and available for indexing in SERPs.

If there’s room for improvement then that’s always the way to go, and there’s plenty to like about the sound of finding exactly what you’re looking for on the web more quickly. So let’s use this week’s entry to look at Andi in more detail and see if this isn’t one new search engine that’s about to explode in popularity.

Smart Alternative

The Andi search engine mixes large language models and live web data to come up with a more detailed answer to the questions searchers enter. AI and natural language processing are used to understand a question’s intent, and from there the engine looks at the top 10 to 20 results for the query. The pages that make the cut are then summarized with large language models to generate a more direct answer to the question, ideally one that gets the user where they want to be more quickly.
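To make the retrieve-then-summarize pattern described above more concrete, here is a small illustrative sketch in Python. The ranking and "summarization" logic are toy stand-ins invented for this example; a real engine like Andi would query a full search index and run the top pages through a large language model.

```python
# Toy sketch of a retrieve-then-summarize search flow: rank pages, keep the
# top k, then pull the most query-relevant sentence as a direct answer.

def retrieve_top_results(query, index, k=20):
    """Rank pages by naive keyword overlap and keep the top k
    (Andi reportedly looks at the top 10-20 results per query)."""
    terms = set(query.lower().split())
    scored = []
    for url, text in index.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            scored.append((overlap, url, text))
    scored.sort(reverse=True)
    return scored[:k]

def summarize(query, results):
    """Stand-in for the LLM step: return the sentence from the best-matching
    page that shares the most words with the query."""
    if not results:
        return "No answer found."
    _, url, text = results[0]
    terms = set(query.lower().split())
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    best = max(sentences, key=lambda s: len(terms & set(s.lower().split())))
    return f"{best}. (source: {url})"

# A tiny invented "web index" for demonstration
index = {
    "example.com/espresso": "Espresso is brewed by forcing hot water through "
                            "finely ground coffee. It originated in Italy.",
    "example.com/tea": "Tea is an infusion of leaves. It is popular worldwide.",
}
query = "how is espresso brewed"
print(summarize(query, retrieve_top_results(query, index)))
```

The point of the sketch is the shape of the pipeline, not the quality of the ranking: retrieve a short list of candidates, then compress them into one direct answer instead of handing the user the list itself.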

The makers of Andi have said this is something that has been badly needed for a long time, especially given the way the web has evolved over the better part of 30 years. The belief there is that Google – among others – is built for how the web worked 20 years ago. What’s happened is that the cognitive overload generated by ads and links has gotten to the point where it is distracting the user and degrading the web-searching process.

The aim is to provide direct answers to questions, not simply dump out a list of links where you might find that answer. It’s a real departure from the standard way of thinking around all of this, but it’s something that will have more and more merit as users demand a better web search experience.

Geared for Younger Demographic

It’s also very clear that this new search engine is designed for a younger demographic, one that simply doesn’t interact with the web the same way the generations that came before them did. Andi is likened to getting search results in a social media feed, and that may be all that needs to be said about why it is going to be a really good fit for younger users.

People from that age demographic who were able to try Andi out said they liked the clean reading appearance of the search results too, and all in all it’s believed to be designed to line up with younger users’ preference for visual feeds and chat apps. These are conversational interfaces, and catering to that preference played a big part in how Andi came to be.

It’s a marked departure from every other approach to designing a search engine, and it’s that conversational interface that is going to allow Andi to take on the big guys like Google and Bing. If you feel that too much information, spam, and clutter in the results is degrading your search experience, Andi may be worth a try.

There’s some reason to believe this new type of search engine may appeal to older folks too, and that’s because there’s a concurrent belief that users are tiring of Google’s search algorithms not being as objective as people would like them to be. It takes a very discerning user to even reach that conclusion, but there are ever-greater numbers of them in the older segments of users too.

Search Alternatives Needed

The question then becomes how many people will be easily persuaded to move from their current search engine to a new and experimental one like Andi. The one thing that may sway them is the awareness that there is simply too much information online now, with the problem being that most of it is not quality information. Many of the newer search engines simply duplicate the look and feel of Google, but new means of taking in and digesting information may be an impetus for change, however slow.

There are also other ways this search engine is going to be different. There will be no charge for the service and it won’t track users’ personal identifying information. There are already talks of it partnering with tools like Amazon Alexa and other kinds of voice-powered search, but departing from the current model where searches are bundled with advertising may create a need to generate revenue by some other means.

Estimates Suggest More Than 3 of 4 Websites Are Stealing Data

It was a long time ago now that browsing the web was a newfound thing, going back to the days of dial-up modems and all the other archaic stuff we’ve been fortunate enough to leave far behind. But as much as using the World Wide Web was more of an ordeal back then, the one thing it may have had going for it was that it was a much safer and less risky endeavor. We certainly didn’t hear of malware, spyware, or -ware of pretty much any sort, and we weren’t at risk of having our data stolen.

No one would suggest devolving when it comes to the Web given everything we use it for these days, and that sentiment is probably universal. For some though, the way we’ve swung to the other far end of the spectrum needs to be a concern, because cybersecurity risks are more ominous and more of a threat than ever before. The way the Internet has evolved and continues to evolve is what makes these risks and threats possible, in the same way it makes all of the ‘good’ possible too.

What we’re going to look at with our entry this week is research suggesting that upwards of three-quarters of websites are either stealing data or have the capacity to do so. Here at 4GoodHosting we’re like any quality Canadian web hosting provider in that this is a subject that makes sense to share with our customers and those who might be here reading our blog. It’s fine to rely on the web safety protocols you practice or already have in place for your device, but a question for you then – how much of the time are you on the web and NOT on your home Wi-Fi network?

Probably fairly often now, just like the rest of us. So let’s look at all of this in more detail and try to gauge just how much of a problem this is.

Wary of Search Bars

They may seem like nothing out of the ordinary and never a cause for concern, but seeing a search bar at the top of a webpage may need to be cause for caution now and in the future. See one, use it, and there is a chance that your personal information is in the process of being leaked. Leaked to whom? Large networks of advertisers who are keen to get as much of your disposable income as possible. This is known as data crawling, and it’s happening with increasing frequency all the time.

So frequent in fact that research conducted by Norton Labs has produced an estimate that more than 80% of the websites you visit are sending your search queries to 3rd parties, and unfortunately they’re given plenty of incentive to do so. This security experiment crawled 1 million of the top websites on the net, used the internal site search feature on those that had one, and then tracked what happened with the searches. The results were shocking in how bad they were.

From what we can understand, the search term used for each site was ‘jellybeans’, the idea being that such a distinctive term would be easy to find in the network traffic. The results showed that among the top websites with internal site search, 81.3% were leaking search terms to 3rd parties in one form or another.
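The core of the experiment is simple to sketch: submit a distinctive marker term, then see whether that term shows up in requests sent to hosts other than the site itself. Here is a minimal Python illustration of that idea, where the request-log format and all of the domain names are invented for demonstration.

```python
# Hedged sketch of the "jellybeans" leak check: after searching a site for a
# marker term, scan the captured outgoing requests for that term and flag any
# third-party hosts that received it.
from urllib.parse import urlparse

MARKER = "jellybeans"

def find_leaks(first_party_host, requests):
    """Return third-party hosts that received the marker term
    in a request URL or request body."""
    leaks = set()
    for url, body in requests:
        host = urlparse(url).hostname or ""
        if host.endswith(first_party_host):
            continue  # same-site requests are expected to carry the query
        if MARKER in url.lower() or MARKER in (body or "").lower():
            leaks.add(host)
    return sorted(leaks)

# Simulated traffic captured after searching "jellybeans" on example-news.com
traffic = [
    ("https://example-news.com/search?q=jellybeans", None),
    ("https://tracker.adnetwork.example/collect?kw=jellybeans", None),
    ("https://cdn.example-news.com/logo.png", None),
    ("https://metrics.example/beacon", "event=search&term=jellybeans"),
]
print(find_leaks("example-news.com", traffic))
```

In this simulated capture, two third-party hosts receive the search term, one via a URL parameter and one via a request body, which mirrors the "one form or another" leakage the Norton Labs report describes.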

And these are large, high-profile websites we’re talking about here. For example, the report said that CNN is among them, and on the network side we’re talking about giants like Google too. That end of it is probably less surprising, and who knows where the data goes from there when it’s the company behind the software itself that’s after your data. There was also an indication that there are more ways sites are acquiring and selling user data, but the HTTP requests were too muddy to determine exactly how.

Policies Come Up Short

The last thing to say here is that the privacy policies informing users of how data is collected and handled when they visit or search a site aren’t anywhere near entirely upfront about it. The estimate was that only around 13% of privacy policies made it clear that search terms could be collected and redistributed as data. This means regular users are hardly in the know at all about how their private data is handled, given the complicated wording usually found in privacy policies.

So what is the average person to do? The most straightforward piece of advice is to block 3rd-party trackers so that you minimize how much of your data is collected and shared. You can set this up in Chrome, but some people already prefer browsers like Safari and Brave that have these tools built in. And then of course there are privacy-focused search engines such as DuckDuckGo or Brave Search.
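Blocklist-based tracker blocking, which is what browsers with built-in protection do under the hood, comes down to matching request hosts against a list of known tracker domains. Here is a minimal sketch of that matching logic; the domains are placeholders, not a real blocklist.

```python
# Minimal sketch of blocklist-based tracker blocking: a request host is
# blocked if it, or any parent domain of it, appears on the blocklist.

BLOCKLIST = {"tracker.example", "ads.example"}

def is_blocked(host):
    """Block the listed host itself and any of its subdomains."""
    parts = host.split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("pixel.tracker.example"))  # True: subdomain of a listed host
print(is_blocked("example-news.com"))       # False
```

Real content blockers use far larger curated lists (and more rule types than plain domains), but the suffix-matching idea is the same.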

Smart Approaches to Website Retrofits

It’s not often that someone builds a house and then does absolutely nothing to improve it over the entire time they’re living there. The same can be said for a place of business, or the business itself. Nearly every business is in the digital space now, and there are thousands doing e-commerce explicitly. In these cases the website may well be the house and home for the business in that space, and the same should apply – your website as launched is very likely not a finished product. If you’re pleased with it that’s fine, but it is best to take a critical eye to it, and that’s true a week later or many years later.

It may have been enough to get the project going and begin attracting visitors or customers, but ideally this is only the beginning. Staying relevant often means websites must undergo ongoing re-evaluation and redesign. It may even be necessary to completely change things up if you are having issues bringing in those visitors. Facilitating the ideal digital environment for any visitor browsing your website is what makes that happen best. Even minor changes can be integral in boosting popularity and traffic.

It has been a long time since we have touched on website design here in our blog, and of course as a leading Canadian web hosting provider, taking the initiative to put our customers in the know as much as possible when it comes to their site is always going to be a priority. Not everyone is a webmaster who’ll be in their account’s cPanel often, but if you are and you have the wherewithal to know how sites work and rank, then you’ll want to know when it’s time to improve your website.

So that’s what we’re going to look at with this week’s entry.


We can start by saying that knowing your product or service is very beneficial. You then take that knowledge and ask yourself if your website’s functionality is delivering that information to customers in a manner conducive to them a) finding what they need in the most straightforward manner, and b) continuing to move further through the site and interacting with it in a way that leads them to take the actions you’d prefer them to take.

Here are some questions you can ask yourself:

  • How is the website’s loading time?
  • Is navigating the website and locating ___ without prior knowledge of its structure suitably easy?
  • Is the design of the website aesthetically pleasing?
  • Does the site function as intended, and do you feel that the UI / UX is in line with that?
  • Would adding to the website improve it?
  • Does the whole of the website represent your vision for it? If not, why not?

Another idea is to have a friend or professional acquaintance also look over the website while asking them to perform a specific task. If it’s relatively easy for them to be successful with that task then your website is on the right path. You can ask that person the very same questions that we suggest that you ask yourself during personal evaluation.

Redesign Time?

We get that some people will need to have greater expectations for their website, while others will do just fine using a free website builder through their Canadian web hosting provider. For those who need more of a site and are more reliant on it for the profitability of their business, always being open to a functional redesign is the smart way to view your site.

If that’s you, here is a list of potential scenarios where you must take note and address the problem:

  • Website has a higher-than-average bounce rate
  • More than 3 seconds is needed for initial load time
  • New and fresh content is lacking
  • Mobile devices experience your website poorly (you need a mobile website)
  • Navigating the website’s content and functionality isn’t simple
  • Your website’s design doesn’t match the brand
  • The website has dead links
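The first two warning signs in the list above are easy to quantify. Bounce rate is just single-page sessions divided by total sessions, and the 3-second figure is a load-time threshold. The sketch below wraps both checks in a little function; the 55% bounce-rate threshold is an assumption for illustration, since what counts as "higher than average" varies by industry.

```python
# Illustrative check for the two measurable redesign signals: bounce rate
# and initial load time. Thresholds are assumed, not official benchmarks.

def bounce_rate(single_page_sessions, total_sessions):
    """Bounce rate = sessions that viewed only one page / all sessions."""
    return single_page_sessions / total_sessions

def redesign_signals(single_page_sessions, total_sessions, load_time_seconds,
                     bounce_threshold=0.55, load_threshold=3.0):
    signals = []
    if bounce_rate(single_page_sessions, total_sessions) > bounce_threshold:
        signals.append("high bounce rate")
    if load_time_seconds > load_threshold:
        signals.append("slow initial load")
    return signals

# Example: 640 of 1,000 sessions bounced, and the page takes 4.2 s to load
print(redesign_signals(640, 1000, 4.2))
```

The remaining signals on the list (stale content, mobile experience, navigation, branding, dead links) are qualitative and are better caught by the walk-through-with-a-friend test described earlier.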

Tips & Tricks for Website Design

There are some universally-positive attributes for websites, and if you’re evaluating your website for quality for the first time then it can be very helpful to know what to be looking for.

It is best to establish a clean and simple design, and the fastest way to do that is eliminating clutter and modernizing the elements of your website. This way the appeal is both aesthetic and functional. It helps to also understand that scrolling is faster and easier than clicking, so it’s best to avoid tabs and overlapping graphical elements like carousels, accordions, and sliders.

You should also look into whether the font you are using for text on the website is good for readability. There is plenty of information online about research indicating certain ones are better than others. In addition, having good contrast between your text color and background color is helpful, and of course the way the text is written is important too. Hire a web copywriter if analyzing this is beyond the scope of your abilities.

Next, understand that whitespace is your friend. It helps guide your visitor’s attention and showcase what is important. You should also try to have visual cues that direct visitors to the right places on your website and ensure there’s no reason they would miss the key elements you’d like to present. Visual hierarchies are very helpful too. You can provide polls, infographics, or interactive graphical elements with content-based value, which can help increase interest and improve the readability of on-page information.

Importance of Your CRM System

A CRM is your customer relationship management system. CRMs provide a way for online projects to store customer and visitor data, track interactions, and share this information with colleagues. They also let online businesses manage their relationships with customers for the growth and expansion business owners will be looking for. So how do you choose the right CRM if you are retrofitting your website?

The process should be something like this:

  • Clearly defining and identifying your goals to figure out what CRM’s functionality would serve your needs best
  • Trying demos of CRM services to get an idea of how each would suit your online business
  • Reviewing compatibility with your design and software on an ongoing basis
  • Determining how well implementation of any one CRM will work
  • Evaluating if your team would use it effectively or require additional training to use as needed

Compatibility of Web 3 and Net Zero Transition in Question

One thing is for sure: there are an awful lot of adult-age humans on the planet right now, and with that comes a whole lot of consumption of resources. This same overconsumption is occurring in the digital world too, and it is a struggle to stay ahead of roaring demand while still providing consumers with what they want AND not being too wasteful or taxing on the natural environment while doing so. The Internet is a primary means for much of what those adults do in their personal and professional lives, and a large part of the challenge with the transition to Web3 is accommodating the demand while still trying to be environmentally accountable.

Up until recently a lot of people didn’t know that all their online exploits – streaming, surfing, and other activities of the like – were contributing to global warming. Admittedly not contributing as much as other factors, but it all adds up and we are at a very pivotal time with regard to all of this. We won’t go on further about that, but what we do know is that Web3 and its applications, platforms, and assets have the potential to get us past some of the biggest challenges to global sustainability. But how compatible is Web3 really when it comes to being a contributor to a net zero future, as it’s being touted?

This is something that any good Canadian web hosting provider will be keeping an eye on, and those of us here at 4GoodHosting are no exception, as much of what we do plays the underlying role in making websites available and facilitating all that happens after that. There are new projects that exemplify how Web3 is on the right track with this, but there are also concerns about whether this will line up once the transition is complete.

There’s no backtracking to be done on this, so let’s use our entry this week to talk about Web3 and global sustainability in computing.

New Systems Needed

The decentralized autonomous organizations, tokens, and blockchain networks of Web3 have immeasurable potential to be the foundations of a sustainable world, but there is an even more critical view of it based on the IPCC’s latest report advising that transformative new systems are urgently needed to adhere to the Paris Agreement and keep global warming below the mandated 1.5°C. But are disruptive digital technologies pushing climate change along faster, in the same way so much else in our lives is?

Web3 is often referred to as the ‘read-write-own’ web because it puts control over governance and operations into the hands of its global users and promotes different avenues for value creation, as seen most notably in NFTs. People who see the advantage of it are quick to talk about ‘integrity’ when it comes to blockchain technology, and it is true that the blockchains acting as the foundation for Web3’s digital assets offer unique advantages for overcoming challenges to meaningful progress on sustainability.

All good, but there are studies showing that green assets like voluntary carbon credits aren’t making the difference that was foreseen for them, with more than 90% of credits failing to meet basic offset criteria. So there’s more demand for real proof of green value, and tokenization is the answer here as it offers the ability to trace the origins and provenance of an asset through its metadata. Long story short, it is better in at least one regard because it is thoroughly transparent.

Automated Asset Generation

Another big selling point for Web3 is the way it will allow metadata to be integrated with business systems and device intelligence for automated asset generation. A carbon credit can pull data from energy intelligence software, then be verified and certified through an ecosystem of authorities before it eventually stands on its own as a fully tradeable and traceable digital asset.
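The lifecycle just described, a credit carrying its data source and accumulating verifications until it is tradeable, can be sketched as a small data structure. Everything here (field names, the two-sign-off rule, the authority names) is invented for illustration; real tokenization platforms define their own schemas and verification flows.

```python
# Simplified sketch of a carbon credit's metadata trail: it records where its
# data came from and which authorities verified it, and only a fully verified
# credit is treated as a tradeable digital asset in this toy model.
from dataclasses import dataclass, field

@dataclass
class CarbonCredit:
    credit_id: str
    tonnes_co2: float
    data_source: str              # e.g. a feed from energy intelligence software
    verifications: list = field(default_factory=list)

    def verify(self, authority):
        """Append a certifying authority to the credit's provenance trail."""
        self.verifications.append(authority)

    @property
    def tradeable(self):
        # Assumed rule: two independent sign-offs before the asset is tradeable
        return len(self.verifications) >= 2

credit = CarbonCredit("CC-0001", 1.0, "meter-feed:site-42")
credit.verify("registry-auditor")
print(credit.tradeable)   # one sign-off is not enough in this sketch
credit.verify("standards-body")
print(credit.tradeable)
```

The value of the pattern is that the provenance trail travels with the asset, which is exactly the transparency argument made for tokenized green assets above.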

Diverse types of ESG assets beyond carbon offsets can benefit from this verification process, and much has been made of how ideal it will be for certifications determining the legitimacy of low-emissions products like food or fuel, units of renewable power, and investments in sustainable projects like green energy or conservation.

Compatible platforms for the kind of borderless collaboration and international marketplaces required for a successful net zero transition are also going to be needed, and it is not clear whether these developments are where they need to be given the imminent widespread incorporation of Web3 operating spheres. This shift is going to be huge, and whether there will be the degree of collaboration needed remains to be seen.

One thing we do know though is that tokenization definitely has the potential to be an ideal way to encode leading global standards, and hold producers more reliably to account as to whether or not they are doing their part for the collective good when it comes to meeting goals for emission standards in the near future. 

A decentralized future is what we should be aiming for here, one where most people play an active daily role in the energy transition rather than it being dominated by industry and government as it is today. Web3 could do for the energy transition what it is predicted to do for the internet – bring greater numbers of contributors into the battle for a better future, starting with our information systems as we know them in the here and now.

Smartphone Use to Improve Memory?

Everyone will know the standard understanding that too much screen time is bad for us, and that includes the small screens on our phones. Blue light is a problem, especially when it is stimulating you in the evening when you could be winding down and getting ready for sleep. A whole lot of people will admit that they spend too much time on their device, but it turns out that may not be entirely a bad thing. It may still be taking away from your productivity, and some people are definitely hooked on playing games on their phone, but it seems there may be an upside to interacting with handheld devices.

There is research indicating that by entering and keeping information in your phone, you can actually improve your memory for information that is not stored in your phone – meaning the information you have to recall with brainpower alone and not with your handheld digital assistant. Research published in the Journal of Experimental Psychology suggests that much of it is in the way the devices ‘take a load off’ us by not having to remember as much as we do, freeing up memory to recall even more of the less important stuff.

Whether overuse of smartphones or other digital technologies could promote cognitive decline in other contexts remains to be seen, but apparently your memory is not going to become worse by using your smartphone. It might actually get better. This is something that will be of interest to any good Canadian web hosting provider like us here at 4GoodHosting, given that mobile web browsing is much more dominant than desktop or notebook browsing these days, and being focused on having a good mobile website for your business is a need.

Let’s use this entry to talk about smartphone use and memory, and how your memory may actually improve if you’re very active with yours in keeping important information.


In the past, neuroscientists have expressed concerns that the overuse of technology could result in the breakdown of cognitive abilities and cause what they termed ‘digital dementia’. The contrary results we talked about earlier were based on a memory task that 158 study participants between the ages of 18 and 71 played on a touchscreen digital tablet or computer.

They were shown up to 12 numbered circles on the screen, and then needed to drag some of these to the left and some to the right. Their success was based on the number of circles that they remembered to drag to the correct side. One side was designated ‘high value’, and dragging a circle to this side was worth 10 times as many points as remembering to drag a circle over to the other ‘low value’ side.

Participants performed this task 16 times. The key stipulation though was that they had to use their own memory on half of the trials, but were allowed to set reminders on the digital device for the other half.

Most of the participants used the digital devices to store the details of the high-value circles, and their memory for those circles went up 18%. Their memory for low-value circles also improved by 27%, even among participants who had never set reminders for low-value circles.
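The scoring mechanics of the task are simple arithmetic. The 10-to-1 ratio between the two sides comes from the study description above; the absolute point values used here are assumed for illustration.

```python
# Scoring sketch for the memory task: circles dragged to the high-value side
# are worth 10x the points of circles dragged to the low-value side.

HIGH_VALUE_POINTS = 10   # assumed absolute value; the study specifies the 10x ratio
LOW_VALUE_POINTS = 1

def trial_score(high_correct, low_correct):
    """Points for one trial, given correctly remembered circles per side."""
    return high_correct * HIGH_VALUE_POINTS + low_correct * LOW_VALUE_POINTS

# Remembering 4 high-value and 3 low-value circles correctly
print(trial_score(4, 3))
```

This ratio is what makes the participants' strategy rational: offloading the high-value circles to the device protects the points that matter most, while their own memory is left to cover the low-value side.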


Using reminders also came at a cost to memory for many participants, because they were inherently trusting the device to remember for them and would forget the details shortly after setting the reminders. This is what all of us do with contacts, numbers, and much more. When the reminders were taken away, the participants remembered the low-value circles better than the high-value ones.

When people were allowed to use an external memory, their smartphone or other external device helped them remember the information they had saved into it. The research also showed that the device improved people’s memory for unsaved information as well, and the belief is that when people had to remember by themselves, they used their memory capacity for the most important information. But when they could use the device, they saved the high-importance information into it and then used their own memory for the less important information.

All interesting stuff and it may make you think about freeing up space every time you enter something into your calendar or put a new contact into your phone.

Use of Malicious Browser Extensions Becoming More Commonplace

It goes back a long way, but there are some who have been in and around the digital space long enough to tell you of a time when browser extensions weren’t even anything to discuss, simply because they didn’t exist. Why didn’t they exist the way they do today? Well, the simple answer is that web browsers didn’t need to have modifications made to the flow of information back then, because the information was more standardized in the way it was presented. The fact that the information available via the web is accommodated much more thoroughly now is a good thing, and browser extensions are a large part of why that’s the case.

Some may not have the web-world wherewithal to know what a browser extension is in the first place, and if so that’s perfectly alright. They are small software modules installed to customize your browser’s appearance or function, and they can serve any number of purposes, from enabling email encryption, ad blocking, one-click password storage, spell checking, and dark mode to calendar add-ons and more. Browser extensions are also fairly essential for online shopping, something pretty much everyone does these days at least to some extent.

All is well to a point, but where that point ends is when browser extensions become harmful rather than beneficial. And that’s the reality nowadays, and it is something that is of interest to us here at 4GoodHosting in the same way it would be for any quality Canadian web hosting provider. That’s because it’s very much in line with who we are and what we do – namely hosting websites and ones that may have browser extension compatibility formatted into them for any number of perfectly good reasons.

What makes all of this blog worthy today is that malicious browser extensions are becoming more common, and it’s more common for bad actors to be developing and distributing them. Let’s look at why that is this week.

With 3rd Party Developers

The reason that bad-actor browser extensions are so common is that while there are many extensions out there, only some are made by the developers of the primary browsers themselves. Most extensions are made by 3rd-party developers, and as you’d guess many are of no renown whatsoever and as such can make and distribute whatever they please without needing to account for the offering. In fact, malicious browser extensions are becoming so widespread that millions of users have them installed, and many may not even be aware of having done so.

A report that analyzed telemetry data from an endpoint protection solution found that over the last 2.5 years (January 2020 – June 2022), more than 4.3 million unique users were attacked by adware hiding in browser extensions. That works out to some 70% of all affected users encountering a threat of this type, and if that kind of widespread occurrence is transposed onto the larger scale of everyone using the internet for a specific task, it makes clear the large extent of the problem.
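The two figures in that report imply a rough total for all users hit by extension-based threats: if 4.3 million adware victims represent about 70% of everyone affected, a back-of-envelope division gives the implied overall count.

```python
# Back-of-envelope check on the report's figures: 4.3M adware victims
# at ~70% of all affected users implies the total affected population.
adware_users = 4_300_000
adware_share = 0.70

total_affected = adware_users / adware_share
print(f"{total_affected:,.0f}")  # roughly 6.1 million users
```

That implied total of roughly 6.1 million is a rough estimate only, since the 70% figure is itself approximate.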

The same report further suggests that preventative measures put in place over recent years are responsible for more than 6 million users avoiding downloads of malware, adware, and riskware that had been disguised as harmless browser extensions.

Adware and Malware

These extensions serve to target users with adware and other forms of malware, and do so repeatedly, and again most people equipped with them have no idea what’s going on. The most common type of malicious browser extension is adware, unwanted software used to push affiliate links rather than improve the user experience in any way. They function by monitoring user behavior through browser history before redirecting users to affiliate pages and generating a commission for the extension’s makers.

The biggest culprit? WebSearch, detected by antivirus programs as not-a-virus:HEUR:AdWare.Script.WebSearch.gen. It has been inadvertently downloaded a staggering nearly 900,000 times.

This extension is promoted on the basis of being a tool designed to improve the working experience for those who need to do tasks like converting between .doc and .pdf files, but what it does in reality is change the browser’s start page and redirect resources to earn extra money through affiliate links. It also changes the browser’s default search engine to MyWay, capturing user queries and then collecting and analyzing them so that certain affiliate links can be served to the user in search engine results pages.

Malware that can result from carrying malicious browser extensions is a huge problem too, with the worst ones geared to steal login credentials and other sensitive information like payment data. To protect your devices from malicious browser plugins you should ensure that you always download them from a source that is proven trustworthy, and checking reviews and ratings is a recommended way to do that.

DNA Storage the Solution for Data Storage Capacity Demands

There’s no getting around the fact that as so much of everyday life becomes more digital for all of us, there is an ever-increasing need for data storage. And try as they might, those who make more storage available simply can’t keep pace with the growing demand. Innovation is always the requirement in a situation like this, and it appears that is exactly what’s happened. Everybody knows what DNA is, and how it is the genetic sequence that allows us to become who we are right from the moment we’re conceived.

So most people would struggle to connect the dots on how DNA could be implemented as a means of better data storage, but it appears that’s exactly what’s happening. And it is definitely meeting a need, as for many in the space the option of doing away with old data to make space for new data isn’t something they are willing or able to do, based on the nature of what they’re making or offering. Estimates are that in just over 7 years from now there will be a 7.8-million-petabyte deficit between the supply of data storage available and the demand expected for it.

Here at 4GoodHosting we are like any other reliable Canadian web hosting service provider in that we find this stuff very interesting; data storage is very much front and center for us too, and we can absolutely see how integral it is for some organizations and their operations. We wouldn’t have made the connection between genome sequencing and strands of DNA on one hand and data storage on the other, but we have now, and that’s what we’ll share with you in this entry.

Microsoft Leading the Way

The key in all of this is going to be reading and writing data faster, and Microsoft has been leading the way by developing a new method for writing synthetic DNA with a chip that is 1,000 times faster, so that the higher write throughput works out to lower costs. That in turn will allow companies to afford the additional data storage they’re going to need anyway.

The data dilemma is this. As the volume of data produced by internet activity, digital devices and IoT sensors continues to expand with speed, the problem becomes a question of where to put it all. It’s true that hard disk drives (HDDs) and solid state drives (SSDs) do well with holding and supplying the quantities of data that servers and client devices need to function. But neither of them is really practical for storing vast quantities of information for long periods of time.

For archival storage, Linear Tape-Open (LTO) magnetic tape is best, boasting the lowest cost per capacity of any technology. The current generation of tape, LTO-9, has a native capacity of 18TB, and at about $8.30 per terabyte it is affordable. Investing in it is key, especially as the alternative – as mentioned – is to delete older data. That’s just not a realistic option, especially for any organization working in AI, where products are typically informed by large and very exhaustive pools of data.
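
Using the figures above (18TB native per LTO-9 cartridge, roughly $8.30 per terabyte), the archival cost math is easy to sketch. The 500TB dataset below is just a hypothetical example, not a figure from any real deployment.

```python
# Rough LTO-9 archival media cost, using the figures quoted above:
# 18 TB native capacity per cartridge at about $8.30 per terabyte.

import math

LTO9_CAPACITY_TB = 18
COST_PER_TB = 8.30

def tape_archive_cost(dataset_tb):
    """Return (cartridges needed, total media cost in dollars)."""
    cartridges = math.ceil(dataset_tb / LTO9_CAPACITY_TB)
    return cartridges, cartridges * LTO9_CAPACITY_TB * COST_PER_TB

# Hypothetical example: archiving 500 TB of cold data
cartridges, cost = tape_archive_cost(500)
print(cartridges, round(cost, 2))  # 28 cartridges, about $4,183 in media
```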

The only big drawback to LTO tape is that data can only be accessed serially. This makes it hard to locate specific files, and often creates the need to migrate to fresh tape to avoid data loss. Not something that has to be done regularly, but it does need to be done and it can be time consuming.

DNA – As Storage

The four molecular building blocks of DNA – adenine (A), guanine (G), cytosine (C) and thymine (T) – can be utilized as an extremely dense and durable form of data storage. The approach converts binary 1s and 0s into the four-lettered genetic alphabet, with just one gram of DNA being capable of storing 215 PB (220,000 TB) of data. Experts say this will work out to ultra-high density, acceptable cost, and much more in the way of sustainable operations.
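
As an illustration of the conversion described above, here is a minimal toy codec that maps each 2-bit pair onto one base. Real DNA storage schemes add error correction and avoid problematic base runs; this sketch deliberately skips all of that.

```python
# Toy binary-to-DNA codec: each 2-bit pair maps to one of the four bases.
# Real encoding schemes add error correction and avoid long runs of the
# same base; this illustration deliberately skips that.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data):
    """Turn bytes into a DNA strand (a string of A/C/G/T)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand):
    """Turn a DNA strand back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)                   # CAGACGGC
assert decode(strand) == b"Hi"  # lossless round trip
```

For scale, at 215 PB per gram the projected 7.8-million-petabyte shortfall mentioned earlier would fit in roughly 7.8 million / 215 ≈ 36,000 grams, or about 36 kg, of DNA.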

As of now, though, the technology remains unusable at scale, because of the time it takes to write data to DNA along with other technical challenges. Future datacenters will need everything the SSD, HDD, and tape industries are capable of, with DNA, optical, and perhaps other enterprise storage technologies put in place to supplement them.

Who knows – eventually this may become commonplace in the web hosting industry here in Canada, and that’s not a far-fetched idea considering how the data storage needs of providers like us have also been increasing nearly exponentially in recent years, a trend likely to continue.

AI and Machine Learning being Trained at Double Speed

It seems to be no coincidence that the emergence of artificial intelligence and machine learning has occurred at exactly the time humankind needs it most, as the entire world struggles to keep pace with a global population that continues to explode, bringing a whole whack of major challenges along with it. That’s especially true when it comes to continuing to provide everyone with what they’ve come to expect in regards to essential services and the like. The need is pressing, but fortunately there are very talented engineers and equally dedicated directors above them who are applying themselves to the best of their ability.

While in the simplest sense this won’t have anything directly to do with providing web hosting, our industry is already in the process of being touched by this trend too, as there are definitely applications for better and more efficient data management and data streamlining made possible by machine learning. One of those secondary effects could be better proactivity in managing data overloads caused by unexpected external influences. The major heatwave in the UK 2 weeks ago forced some data centers to shut down.

Machine learning may provide systems with the means of getting ahead of the curve in dealing with that, so a complete shutdown isn’t needed. What would occur isn’t exactly what is known as load shedding, but the process would be similar: being able to foresee what’s coming and knowing where to best make cuts temporarily so that the cumulative effect of it all isn’t so catastrophic. As a Canadian web hosting provider, those of us here at 4GoodHosting can see all sorts of promise in this.
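
As a deliberately simplified sketch of that idea: use a temperature forecast to shed lower-priority workloads before heat forces a full shutdown. All thresholds and workload names here are hypothetical illustrations, not anything a real data center uses.

```python
# Hypothetical predictive shedding: a temperature forecast decides which
# workload tiers keep running. Thresholds and names are made-up examples.

FORECAST_SHED_TEMP_C = 38   # start shedding when the forecast exceeds this
CRITICAL_TEMP_C = 42        # above this, only the most essential workload stays

# Workloads ordered from lowest to highest priority (hypothetical names)
WORKLOADS = ["batch-analytics", "staging-envs", "cdn-cache", "customer-sites"]

def workloads_to_keep(forecast_temp_c):
    """Decide which workloads stay running for a given temperature forecast."""
    if forecast_temp_c >= CRITICAL_TEMP_C:
        return WORKLOADS[-1:]      # keep only the highest-priority workload
    if forecast_temp_c >= FORECAST_SHED_TEMP_C:
        return WORKLOADS[2:]       # shed the two lowest-priority tiers
    return WORKLOADS               # normal operation

print(workloads_to_keep(36))  # all four workloads keep running
print(workloads_to_keep(39))  # ['cdn-cache', 'customer-sites']
print(workloads_to_keep(43))  # ['customer-sites']
```

The point isn’t the thresholds themselves; it’s that a forecast lets the system degrade gracefully in stages instead of failing all at once.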

2x Speed

There is now a set of benchmarks – MLPerf – for machine-learning systems, and the latest results show they can be trained nearly 2x as quickly as they could last year. The bulk of these training speed gains are thanks to software and systems innovations, but new processors from Graphcore, Intel subsidiary Habana Labs, and others are contributing nicely too.

Previously there was no getting around the fact that it took neural networks a REALLY long time to train. This is what drove companies like Google to develop machine-learning accelerator chips in house. But the new MLPerf data shows that training time for standard neural networks has dropped dramatically in very little time. Neural networks can now be trained far faster than you would expect, and this really is a beautiful thing when you understand the big-picture relevance of it.

It is prompting machine-learning experts to dream big, especially as the growth of new neural networks continues to outpace computing power. MLPerf is based on 8 benchmark tests:

  • image recognition
  • medical-imaging segmentation
  • two versions of object detection
  • speech recognition
  • natural-language processing
  • recommendation
  • reinforcement learning (tested via a form of gameplay)
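
What all of these tests share is that MLPerf measures time-to-target-quality rather than raw throughput. The toy harness below sketches that idea; the training step and its accuracy curve are stand-ins for a real model, not part of MLPerf itself.

```python
# Toy time-to-target-quality harness in the spirit of MLPerf: the clock
# stops when the model first reaches the target accuracy. fake_train_step
# is a stand-in whose accuracy simply improves with the step count.

import time

def time_to_target(train_step, target_accuracy, max_steps=10_000):
    """Return wall-clock seconds until the target accuracy is reached,
    or None if it never is within max_steps."""
    start = time.perf_counter()
    for step in range(1, max_steps + 1):
        if train_step(step) >= target_accuracy:
            return time.perf_counter() - start
    return None

# Stand-in "training" whose accuracy improves with the step count
fake_train_step = lambda step: 1.0 - 1.0 / step

elapsed = time_to_target(fake_train_step, target_accuracy=0.99)
print(elapsed is not None)  # True: the target is reached around step 100
```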

As of now, systems built using Nvidia A100 GPUs have been dominating the results. Nvidia’s newer GPU architecture, Hopper, was also designed with architectural features aimed at speeding training even further.

As for Google’s offering, the TPU v4 features impressive improvements in computations per watt over its predecessor, now being able to compute 1.1 billion billion operations per second. At that scale, the system needed just over 10 seconds for the image-recognition and natural-language-processing trainings.

Another emerging trend is to have an IPU where both chips in the 3D stack do computing. The belief here is that we could see machine-learning supercomputers capable of handling neural networks 1,000 times or more as large as today’s biggest language models.

Better Sequencing Lengths

This advance goes beyond the networks themselves, as there is also a need to increase the length of the sequence of data the network can take in, to promote reliable accuracy. In simpler terms this relates to how many words a natural-language processor is able to be aware of at any one time, or how large an image a machine vision system is able to view. As of now those don’t really scale up well: doubling the sequence size roughly quadruples the scale of the attention layer of the network. The focus remains on building an algorithm that gives the training process an awareness of this time penalty and a way to reduce it.
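
The scaling penalty above comes from self-attention: every token attends to every other token, so the cost grows with the square of the sequence length. A quick illustration:

```python
# Self-attention computes one score for every pair of tokens, so its cost
# grows quadratically: doubling the sequence length quadruples the work.

def attention_score_count(seq_len):
    """Pairwise attention scores for one head: every token attends to every token."""
    return seq_len * seq_len

base = attention_score_count(1024)
doubled = attention_score_count(2048)
print(doubled // base)  # 4: double the sequence, quadruple the attention cost
```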

$52 Billion Spending Bill For US Domestic Semiconductor Production on Its Way

Much has been made of the fact that the worldwide shortage of semiconductor chips has been a big blow to production across all sorts of industries, and has caused major supply chain issues. Apparently the number of people waiting to be able to buy new vehicles is staggering, and there are plenty of high-tech devices that aren’t available to consumers in the volume they need to be because the manufacturers simply can’t get the chips they need. Currently the majority of the world’s semiconductor chips are made in Taiwan, and there are geopolitical concerns about the long-term stability of that industry there.

Speaking of long term, one of the biggest concerns about any long-term shortage while there is a lack of domestic production in the USA relates to IoT (Internet of Things)-ready devices. This technology was earmarked for big-time advances in healthcare and other types of societal needs, and the risk is that those changes might not roll out as quickly as needed, along with the widespread adoption of 5G network usage. Of course there are other interests too, but long story short, there are simply not enough semiconductor chips being produced.

This is something beyond the immediate scope for any web hosting provider in Canada, but like most, those of us here at 4GoodHosting are attuned to major advances in the digital world, so this type of news story – where semiconductor chip production here in North America is about to potentially get a big boost – is newsworthy. So let’s use this week’s entry to look much deeper into this development with our US neighbours.

1 Step Closer

The CHIPS Act looks like it has the number of votes needed to pass the Senate and move on to the House of Representatives, and if it gets approval, a massive investment into building a strong domestic industry for semiconductor chip production is about to get started. By encouraging US-based production, it is thought that all of the collective expertise needed will already be in the country, and that becoming a leader in world semiconductor chip production shouldn’t take long.

Further, it is believed that other countries where consumer electronic devices are made are keen to see a new big-league player enter the game when it is a country that has plenty of clout already and should be a stable source of chips, without the kind of instability some foresee with Taiwan and others.

The new bill will provide around $52 billion in incentives for semiconductor manufacturers to make their chips in the United States. Among those beneficiaries will be Intel, which has been a fervent supporter of the CHIPS Act since its inception. By threatening to scale back a planned Ohio chip plant if the funding doesn’t come through, big player Intel is trying to push it through.

Improved Act

Insiders stress that this version of the CHIPS Act is very different from the initial $250 billion version the US Senate approved in 2021. Not being approved by the House may have been a good thing, as the House ended up creating its own version of the bill, which the GOP rejected over climate provisions. Apparently this is what the Senate believes will be needed for a bill that can make it through both chambers of Congress.

Another concern, though one that may not be talked about as much, is the gains being made in semiconductor chips by China and other countries in Asia. One US Senator stated her belief that giving up even a slight amount of control of the semiconductor market to China and South Korea could, over 3 years, mean losses of up to 5 million US jobs and $2 trillion of GDP. That’s something they’ll want to avoid.

This stands to benefit us here in Canada given the nature of our relationship with the States and how our tech industries feed off each other nicely. More domestic semiconductor chip production here in North America can only benefit us.

New Solar Cell Technology Showing Great Long-Haul Promise

Improving energy storage technology is very much part of what is going to be required of humanity if we are to adhere to the eco-protection goals that are important to all countries, or at least that’s what they’re claiming. Solar has always had the potential to be part of that, but until recently there hasn’t been the level of focus needed on integrating it to the extent it should be. That is changing, and the best example is how many people you now see with solar panels on their homes recharging a Tesla or other electric vehicle. But solar cell technology has had its limitations.

Until now, that is – or at least possibly. If you think perovskite is the name of a Slavic rare-earth mineral you’re excused, but what we’re talking about here are 30-year perovskite solar cells. They’re the key to an emerging class of solar energy technology, and by proving they’re capable of a 30-year working lifetime they are set to change the parameters of what solar technology is capable of. They also promise a lot more in the way of environmental friendliness, and far less in the way of harmful practices to source the components required to build solar batteries.

Here at 4GoodHosting we’re like any other good Canadian web hosting provider in that we can see the wide-reaching benefits this may foster, and we can relate in a sense in that any way you can make powering data centres more efficient is going to be music to our ears. And likewise for a lot of people who have their work and livelihood in the digital space.

So let’s have a look at these new solar cells with our entry this week.

Serious Lasting Power

We have engineers at Princeton University to thank for this, as the first perovskite solar cell with a commercially viable lifetime marks a major milestone for an emerging class of renewable energy technology. Twenty years was the previous viability threshold for solar cells, but now they’ve potentially added a decade on top of that. They are set to rival the performance of silicon-based cells, which have by and large been unchallenged since 1954.

Perovskites feature a special crystal structure that makes them ideal for solar cell technology. They can be manufactured at room temperature, use much less energy to produce than silicon, and are cheaper and more sustainable to make – there is little to suggest they aren’t superior to silicon. Perovskites are also more flexible and transparent, opening up uses for solar power beyond the rectangular panels that populate hillsides and rooftops across North America.

They also promise to be commercially viable. However, one issue to date has been that perovskites are much more fragile than silicon. Not anymore, though, as the durability potential for PSCs is now quite good. Long-term testing will determine whether they make the grade as durable, consumer-friendly technologies, but as of now it looks quite promising, especially as relates to efficiency and stability.

Best of Both

Stability hasn’t improved nearly as quickly as efficiency has in solar cell technology over the past decade or so. But stability has come far enough that it’s no longer a real liability, though experts say there needs to be sophistication to go along with that stability before there’s widespread adoption and rollout.

Where this started was in early 2020, with the Princeton team’s focus on various device architectures that would maintain relatively strong efficiency while converting enough sunlight to electric power and surviving the onslaught of heat, light, and humidity that until now quickly degraded a solar cell during its lifetime.

What resulted were cells made from different materials in order to optimize light absorption while protecting the most fragile areas from exposure. These cells feature an ultra-thin capping layer between two crucial components – the absorbing perovskite layer and a layer made from cupric salt and other substances that serves as the conduit for the charge. Once it was determined that the perovskite semiconductor would not burn out in a matter of weeks or months, they realized they might well be onto something legit here.

Solar Cell Star

The capping layer on these cells is only a few atoms thick — more than a million times smaller than the smallest thing a human eye is capable of seeing. This super thin layer is a key part of why perovskites have the potential to handily outdistance the previous threshold of a 20-year lifetime for solar cells.
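
A quick back-of-envelope check on that scale claim, using the common rule of thumb that the human eye resolves features down to roughly 0.1 mm:

```python
# Sanity-checking the scale claim: ~0.1 mm is roughly the smallest feature
# the eye resolves, and a million times smaller than that is 1e-10 m,
# about one angstrom, i.e. the diameter of a single atom.

EYE_RESOLUTION_M = 1e-4  # ~0.1 mm

capping_layer_m = EYE_RESOLUTION_M / 1_000_000
print(capping_layer_m)  # 1e-10
```

So "a few atoms thick" and "a million times smaller than the eye can see" are indeed consistent with each other.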

The next question was how long could they expect peak efficiency to be maintained, and the results were positive there too – basically zero drop after nearly half a year.

Longer story shorter, the efficiency of devices with these cells has been very impressive. While the first PSC showed a power-conversion efficiency of less than 4%, that metric has now increased nearly tenfold in as many years. This works out to one of the fastest improvements ever seen for any class of renewable-energy technology to date.

Perovskites also have a unique ‘tunability’ that allows scientists to tailor them for highly specific applications, along with the ability to manufacture them locally with low energy inputs. Plus, they now come with a credible forecast of extended life, coupled with a sophisticated aging process for testing a wide array of designs.

Last but not least, there is also reason to believe that Perovskites could be an integral part of bringing silicon together with emerging platforms such as thin-film and organic photovoltaics, which have also made great progress in recent years.