AI and Machine Learning Now Being Trained at Double Speed

It seems to be no coincidence that artificial intelligence and machine learning have emerged at exactly the time humankind needs them most, as the entire world struggles to keep pace with a global population that continues to explode, an explosion that brings a whole host of major challenges along with it. That is especially true when it comes to continuing to provide everyone with what they’ve come to expect in regards to essential services and the like. The need is pressing, but fortunately there are very talented engineers, and equally dedicated directors above them, who are applying themselves to the best of their ability.

While in the simplest sense this won’t have anything directly to do with providing web hosting, our industry is already being touched by this trend too, as there are definite applications for the better and more efficient data management and data streamlining that machine learning can make possible. One of the secondary effects could be more proactive management of data overloads caused by unexpected external influences. The major heatwave in the UK two weeks ago, for example, forced some data centres to shut down entirely.

Machine learning may provide systems with the means of getting ahead of the curve in dealing with that, so a complete shutdown isn’t needed. What would occur isn’t exactly what is known as load shedding, but the process would be similar: being able to foresee what’s coming and knowing where best to make temporary cuts so that the cumulative effect of it all isn’t so catastrophic. As a Canadian web hosting provider, those of us here at 4GoodHosting can see all sorts of promise in this.
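For illustration, the kind of decision such a system would make can be sketched in a few lines. Everything here – the workload names, priorities, and capacity figures – is a hypothetical assumption of ours, not anything from a real data centre:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: int      # lower number = more critical, shed last
    load_kw: float     # estimated power/cooling load

def plan_shedding(workloads, forecast_kw, capacity_kw):
    """Pick the least-critical workloads to pause so the forecast load
    fits back under capacity, instead of shutting everything down."""
    overload = forecast_kw - capacity_kw
    if overload <= 0:
        return []
    shed = []
    # Shed least-critical workloads first (highest priority number).
    for w in sorted(workloads, key=lambda w: -w.priority):
        if overload <= 0:
            break
        shed.append(w.name)
        overload -= w.load_kw
    return shed

jobs = [Workload("billing-db", 1, 40.0),
        Workload("batch-reports", 3, 25.0),
        Workload("video-transcode", 4, 35.0)]
print(plan_shedding(jobs, forecast_kw=120.0, capacity_kw=70.0))
# -> ['video-transcode', 'batch-reports']
```

The point of the machine-learning piece would be supplying the `forecast_kw` number ahead of time; the shedding decision itself is simple once a reliable forecast exists.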

2x Speed

There is now a set of benchmarks – MLPerf – for machine-learning systems, and the latest results show that these systems can be trained nearly 2x as quickly as they could last year. The bulk of these training speed gains are thanks to software and systems innovations, but new processors from Graphcore, Intel subsidiary Habana Labs, and others are contributing nicely too.

Previously there was no getting around the fact that it took a REALLY long time to train neural networks. This is what drove companies like Google to develop machine-learning accelerator chips in house. But the new MLPerf data shows that training standard neural networks has become far less taxing in very little time. Neural networks can now be trained much faster than you would expect, and that really is a beautiful thing once you understand the big-picture relevance of it.

It is prompting machine-learning experts to dream big, especially as new neural networks continue to outpace available computing power. MLPerf is based on 8 benchmark tests:

  • image recognition
  • medical-imaging segmentation
  • two versions of object detection
  • speech recognition
  • natural-language processing
  • recommendation
  • reinforcement learning, tested via a form of gameplay

As of now, systems built using Nvidia A100 GPUs have been dominating the results. Nvidia’s newer GPU architecture, Hopper, should push things further still, as it was designed with architectural features aimed specifically at speeding training.

As for Google’s offering, TPU v4 features impressive improvements in computations per watt over its predecessor, now being able to perform 1.1 billion billion operations per second. At that scale, the system needed just over 10 seconds for the image-recognition and natural-language-processing trainings.

Another notable trend is the IPU with 3D chip stacking, where both chips in the stack do computing. The belief here is that we could see machine-learning supercomputers capable of handling neural networks 1,000 times or more as large as today’s biggest language models.

Better Sequencing Lengths

This advance goes beyond the networks themselves, as there is also a need to increase the length of the sequence of data the network can take in if accuracy is to stay reliable. In simpler terms, this relates to how many words a natural-language processor is able to be aware of at any one time, or how large an image a machine-vision system is able to view. As of now those don’t scale up well: doubling the sequence length roughly quadruples the scale of the attention layer of the network. The focus remains on building an algorithm that gives the training process an awareness of this time penalty and a way to reduce it.
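The reason sequence length scales so poorly is that a standard attention layer scores every element of the sequence against every other one, so its memory cost grows with the square of the length. A quick illustrative calculation (the head count and precision here are our own assumed values, not MLPerf figures):

```python
def attention_matrix_bytes(seq_len, num_heads=16, bytes_per_val=2):
    """Memory for the attention score matrices of one layer:
    one (seq_len x seq_len) matrix per head, fp16 values."""
    return num_heads * seq_len * seq_len * bytes_per_val

for n in (1_024, 2_048, 4_096):
    mib = attention_matrix_bytes(n) / 2**20
    print(f"seq_len={n:5d} -> {mib:8.1f} MiB of attention scores")
```

Running this shows the quadratic blow-up directly: each doubling of the sequence length multiplies the attention memory by four, which is exactly why longer contexts are so hard to scale.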

$52 Billion Spending Bill For US Domestic Semiconductor Production on Its Way

Much has been made of the fact that the worldwide shortage of semiconductor chips has been a big blow to production across all sorts of industries and has caused major supply chain issues. The number of people waiting to buy new vehicles is staggering, and plenty of high-tech devices aren’t available to consumers in the volumes they need to be because manufacturers simply can’t get the chips they need. Currently the majority of the world’s semiconductor chips are made in Taiwan, and there are geopolitical concerns about the long-term stability of the industry there.

Speaking of long term, one of the biggest concerns about any type of long-term shortage while the USA lacks domestic production relates to IoT (Internet of Things) ready devices. This technology was earmarked for big-time advances in healthcare and other societal needs, and the risk is that those changes might not roll out as quickly as needed alongside the widespread adoption of 5G networks. Of course there are other interests too, but long story short there are simply not enough semiconductor chips being produced.

This is something beyond the immediate scope of any web hosting provider in Canada, but like most, those of us here at 4GoodHosting are attuned to major advances in the digital world, so a news story where semiconductor chip production here in North America is about to potentially get a big boost is newsworthy. So let’s use this week’s entry to look much deeper into this development with our US neighbours.

1 Step Closer

The CHIPS Act looks like it has the number of votes needed to pass the Senate and move on to the House of Representatives, and if it gets approval there, a massive investment in building a strong domestic industry for semiconductor chip production is about to get started. By encouraging US-based production, it is thought that all of the collective expertise needed will already be in the country, and that becoming a leader in world semiconductor chip production shouldn’t take long.

Further, it is believed that other countries where consumer electronic devices are made are keen to see a new big-league player enter the game when it is a country that has plenty of clout already and should be a stable source of chips, without the kind of instability that some see with Taiwan and others.

This new bill will provide around $52 billion in incentives for semiconductor manufacturers to make their chips in the United States. Among the beneficiaries will be Intel, which has been a fervent supporter of the CHIPS Act since its inception. By threatening to scale back a planned Ohio chip plant if the funding doesn’t come through, the big player is doing what it can to push the bill through.

Improved Act

Insiders stress that this version of the CHIPS Act is very different from the initial $250 billion version that the US Senate approved in 2021. Not being approved by the House may have been a good thing, as the House ended up creating its own version of the bill, which the GOP rejected over climate provisions. The current version is what the Senate believes will be needed for a bill that can make it through both chambers of Congress.

Another concern, though one that may not be talked about as much, is about any gains made in semiconductor chips by China and other countries in Asia. One US Senator stated her belief that giving up even a slight amount of control of the semiconductor market to China and South Korea could, over 3 years, mean losses of up to 5 million US jobs and $2 trillion of GDP. That’s something they’ll want to avoid.

This stands to benefit us here in Canada given the nature of our relationship with the States and how our tech industries feed off each other nicely. More domestic semiconductor chip production here in North America can only benefit us.

New Solar Cell Technology Showing Great Long-Haul Promise

Improving energy storage technology is very much a part of what is going to be required of humanity if we are to adhere to the eco-protection goals that are important to all countries, or at least that’s what they’re claiming. Solar has always had the potential to be a part of that, but until recently there hasn’t been the level of focus needed on integrating it to the extent required. That is changing, and the best example of that will be how many people you see with solar panels on their homes recharging the Tesla or other electric vehicle. But solar cell technology has had its limitations.

Until now, that is, or at least possibly. If you think perovskite is the name of a Slavic rare-earth mineral you’re excused, but what we’re talking about here are 30-year perovskite solar cells. They’re the key to an emerging class of solar energy technology, and by proving capable of a 30-year working lifetime they are set to change the parameters of what solar technology can do. They also mean a lot more in the way of environmental friendliness and far less in the way of harmful practices to obtain the components required to build solar cells.

Here at 4GoodHosting we’re like any other good Canadian web hosting provider in that we can see the wide-reaching benefits this may foster, and we can relate in a sense in that any way you can make powering data centres more efficient is going to be music to our ears. And likewise for a lot of people who have their work and livelihood in the digital space.

So let’s have a look at these new solar cells with our entry this week.

Serious Lasting Power

We have engineers at Princeton University to thank for this offering: the first perovskite solar cell with a commercially viable lifetime, which marks a major milestone for an emerging class of renewable energy technology. Twenty years was the previous viability threshold for solar cells, but they’ve now potentially added a decade on top of that. The new cells are set to rival the performance of silicon-based cells, which have been largely unchallenged since 1954.

Perovskites feature a special crystal structure that makes them ideal for solar cell technology. They can be manufactured at room temperature using much less energy than silicon, making them cheaper and more sustainable to produce, and there is little to suggest they are anything but superior to silicon. Perovskites are also more flexible and transparent, which opens up uses for solar power beyond the rectangular panels that populate hillsides and rooftops across North America.

They promise to be commercially viable too. However, one issue to date has been that perovskites are much more fragile than silicon. That may no longer be the case, as the durability potential for perovskite solar cells (PSCs) now looks quite good. Long-term testing will determine whether they make the grade as durable, consumer-friendly technologies, but as of now it looks quite promising, especially as relates to efficiency and stability.

Best of Both

Stability hasn’t improved nearly as quickly as efficiency has with solar cell technology over the past decade-plus. But stability has come far enough that it’s no longer a real liability; experts say there needs to be sophistication to go along with that stability, though, before there’s widespread adoption and rollout.

Where this started was in early 2020, with the Princeton team focusing on various device architectures that would maintain relatively strong efficiency – converting enough sunlight to electric power – while surviving the onslaught of heat, light, and humidity that until now has quickly degraded a solar cell over its lifetime.

What resulted were cells made from different materials in order to optimize light absorption while protecting the most fragile areas from exposure. These cells feature an ultra-thin capping layer between two crucial components: the absorbing perovskite layer and a layer made from cupric salts and other substances that serves as the conduit for the charge. Once it was determined that the perovskite semiconductor would not burn out in a matter of weeks or months, the team realized they might well be onto something legitimate.

Solar Cell Star

The capping layer on these cells is only a few atoms thick – more than a million times smaller than the smallest thing a human eye is capable of seeing. This super-thin layer is a key part of why perovskites have the potential to handily outdistance the previous 20-year lifetime threshold for solar cells.

The next question was how long could they expect peak efficiency to be maintained, and the results were positive there too – basically zero drop after nearly half a year.

Longer story shorter, the efficiency of devices with these cells has been very impressive. While the first PSC showed a power-conversion efficiency of less than 4%, that metric has been increased nearly tenfold in as many years. This works out to one of the fastest improvements ever seen for any class of renewable-energy technology.

Perovskites also have a unique ‘tunability’ that allows scientists to create highly specific applications, along with the ability to manufacture them locally with low energy inputs. Plus, they now come with a credible forecast of extended life, coupled with a sophisticated aging process that can be used to test a wide array of designs.

Last but not least, there is also reason to believe that Perovskites could be an integral part of bringing silicon together with emerging platforms such as thin-film and organic photovoltaics, which have also made great progress in recent years.

Advantages and Disadvantages of Multiple Domains for Business Websites

Everyone has their own areas of expertise, and there will always be others who are savvier than you when it comes to a certain subject. There are plenty of people who possess a lot of business savvy, but when it comes to positioning their business in the digital space, they are quick to admit that’s not their forte. They delegate that part of their operations to someone who does have the needed know-how, and they’re much better off for it. Making the right decisions in that sphere is more important than ever these days, as so many businesses have a lot riding on their online presence.

Choosing a domain is part of that, but it’s not as difficult a choice as deciding whether or not you’ll have more than one of them. Here at 4GoodHosting we’re like any Canadian web hosting provider in that domains are very much our specialty, and one thing we know is that business owners often come to a point where they need to decide whether they’ll be using multiple domains for more than one website for their business. There are advantages to that, and disadvantages too, so what we’ll do with our entry this week is compare them and hopefully put you more in the know if you need to make a similar decision for yourself.

Securing & Supplementation

Many times this is done because there is a need to point or redirect multiple domain names to just one or two websites. In this situation the additional domains serve to secure the business name or to supplement type-in traffic. Additional domains may also be good if you have a product or service that appeals to different audiences. Having one for each target audience makes it possible to customize the messaging, sales materials, and other marketing strategies to be more conducive to attracting certain types of potential customers.

On the other side of that, multiple domains can be detrimental to page rankings. Many still park multiple domains as aliases for their main website, but that is less and less common these days, and for good reason. One thing we can tell you with authority is that SEO is done on a single domain name and incorporates many things, such as site popularity, the volume and type of content on the site, keywords in meta and title fields, and paying for spots in search engine databases.

Protecting your website name or business brand is usually the primary aim when someone chooses to go with multiple domains. Best practice here is to register similar, complementary domains closely related to your site name, and to do so before anyone else has the chance. Generally speaking, multiple domain names don’t bring much of an advantage to website rankings, but they are effective for safekeeping of your brand recognition and reputation.
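In practice, securing those extra domains usually means permanently redirecting them to the primary site rather than serving duplicate copies. A minimal sketch of that logic, with hypothetical domain names (the actual redirect would normally live in your web server or hosting control panel):

```python
# Hypothetical domains for illustration only.
PRIMARY = "www.example.com"
ALIASES = {"example.net", "example.org", "examp1e.com"}  # typo-squat guard

def handle_request(host, path):
    """Return (status, location): 301 aliases to the primary domain so
    search engines consolidate ranking signals on one site."""
    if host in ALIASES:
        return 301, f"https://{PRIMARY}{path}"
    return 200, None

print(handle_request("example.net", "/pricing"))
# -> (301, 'https://www.example.com/pricing')
```

The 301 (permanent) status is the important detail: it tells search engines the alias is not a separate site, which avoids the duplicate-content problems discussed above.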

Look Long Term

The long-term outlook needs to be on building strong authority sites, and if your website serves a simple purpose then using multiple pages on the same site will likely be adequate. Multiple sites will unavoidably mean more work for you or your webmaster, with each site needing unique content, regular updates, SEO optimization, and ongoing monitoring. And of course this also means more expense, along with economies of scale for hosting and other services that need to be weighed against the value added toward the goals for the sites.

Another point is to make sure your marketing messages are consistent across all platforms, and that includes however many domains and websites you are operating.

Advantages of Multiple Domains

  • Good fit for single businesses with diverse audiences – separate sites make it possible to tailor content and approach to each individual group
  • Showcasing specialization for a niche website – better for supporting the development of deep, topic-specific content, which makes your site a more valuable resource and one that is more likely to be linked to (a big plus for SEO)
  • High turnover – good for any type of business or venture where name changes are common. Multiple domain names can be helpful to leverage an established identity or geographic presence.
  • Better for multiple countries & multiple languages – more than one domain lets you have separate sites for different geographic locations, and you also have the opportunity to tailor content and images to match social and cultural norms in those locations. Matching the different sites to local preferences and habits can also make the URL easier to find, which is a huge plus for an e-commerce website of any kind.

Disadvantages of Multiple Domains

  • Ranking – having multiple sites can at times see the sites penalized for having identical or duplicate content, and this can be true even when the content is published in different languages. Scattered backlinks to your sites, or being subject to bad links redirecting to phishing sites, are ongoing possibilities when you have multiple domains
  • Difficulty locating – the tendency is for people to look up a company by name. With multiple domain names it can become difficult for a prospective customer to find what they need.
  • Authority loss – in situations where domain names need to be changed, there can be a loss of authority in the perception of some people
  • Expense – The more domains and sites you have, the more time and money that is required for them
  • Identity dilution – having branded products split off on different sites may undermine the power and market influence of a company
  • Merging difficulties – should you decide to move back to a single domain and website, migration of resources from the other ones to it can be expensive.
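On the duplicate-content point above, the usual mitigation is a rel="canonical" link telling search engines which copy of a page is the authoritative one. A tiny sketch with a placeholder domain (each duplicate page would carry this tag in its `<head>`):

```python
def canonical_tag(primary_domain, path):
    """Build the <link rel="canonical"> tag a duplicate page should carry,
    pointing search engines at the preferred copy of the content."""
    return f'<link rel="canonical" href="https://{primary_domain}{path}">'

print(canonical_tag("www.example.com", "/services"))
# -> <link rel="canonical" href="https://www.example.com/services">
```

This way the secondary domains can stay live for branding purposes while the ranking signals consolidate on the primary site.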

New Google Password Manager Definitely Makes the Grade

Some of you may not remember it, but there was once a time when you could create any password you liked and it did not have to include any capital letters or numbers at all. You could add them if you liked, but they certainly weren’t necessary. Most of you will be able to say why it became necessary to add them, but for those who can’t, it’s entirely because passwords are much more easily hacked than they used to be. That of course goes step in step with the growth of cyber security threats of all sorts, and all of this complexity goes along with the fact that we all have more passwords than we’ve ever had before.

You’ll be able to say the exact same thing at this time next year, and the year after that. Some people aren’t trusting enough to leave it to their browser and will use a dedicated password manager app like RoboForm or something similar. But the majority of us are trusting enough to let Google Password Manager handle the task most of the time. There may be a few entryways that are more sensitive than others, and you are always given the option of whether you want Google to do the managing (and remembering) of a password.

Now if there’s anyone who still keeps all of their passwords scribbled onto a piece of paper tucked into their wallet we’d love to hear of it, but here at 4GoodHosting we are like any good Canadian web hosting provider in that we, and everyone we know, are perfectly happy to let computers handle that. And that leads to today’s entry topic, where we’ll talk about how the new Google Password Manager is improved to the extent that even the most distrusting of us should give it a try.

Superior Management

The overhaul to the Google Passwords platform we are discussing here is part of Google Chrome, and it was announced less than a week ago. What the overhaul promises to do is allow Android and Chrome browser password management to communicate with each other, detect passwords that have been exposed in security breaches, and provide better and more intuitive help for resetting those passwords when breaches are detected.

It is true that breach detection was already incorporated into Chrome. However, it wasn’t in place and operational everywhere that Google password management existed, and it was entirely absent on Android. It will be now, and included as well will be an easier way to change a password when one is discovered to be compromised and/or stolen. This is a nice new security feature.

Better securing user accounts with better – and tougher – passwords has been a primary focus for security researchers for a very long time now. It should be mentioned that some industry insiders believe we don’t even need passwords anymore. That may or may not be true, but what is indisputable is that people DO have a habit of coming up with insufficient password protection. Then there’s the way some users re-use the same password across multiple websites.

Convenient? Sure, but it makes them vulnerable anywhere and everywhere those identical login credentials are being used.
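If you do want to generate strong, unique passwords yourself, Python’s standard `secrets` module is built for exactly this. A short sketch – the length and the character-class requirements are our own choices, not anything Google prescribes:

```python
import secrets
import string

def make_password(length=16):
    """Generate a random password from letters, digits, and symbols,
    guaranteeing at least one lowercase, uppercase, and digit character."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

print(make_password())  # a different random password every run
```

Because `secrets` draws from the operating system’s cryptographic random source, these passwords are suitable for real credentials, unlike the `random` module.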

Better Password Security, Made Easier

You may not have any background in development, but trust us when we say that making a fundamental change to login structures is not easy. So the focus at Google became making it easier for the average user to create more secure passwords instead. That is now possible via the Chrome browser as well as new hooks in the Android OS.

But Google has gone one step further and launched a new Google Passwords website. This page allows you to use Google password management without needing to be in Android or Chrome, while still coming with the same bells and whistles provided to those who are. You still get help creating complex passwords for websites, and you still have automatic form filling for password management if you’re on Chrome or Android. Auto-login options are still available for iOS, but using the iOS edition of the Chrome browser will be required for them to work.

There’s more to say about this, but blog entries should only be so long. Using a password manager rather than re-using the same passwords, or choosing ones that may be insecure, really is the better choice. All of the information about these updates is available on Google’s new Google Passwords page, and if we’re to be honest, the functionality of it and the way it takes something off your plate has even more to be said for it now.

New AI Breakthrough Has Big Potential for Engineers

None of us were alive when Greek mythology was in its heyday, but if you were you’d certainly know that Cerberus was the 3-headed dog that guarded the gates to Hades. Sure, there will be a few history buffs alive today who know of this mythological creature, but it really is a shame that, generally speaking, humans don’t read books like they used to. But enough about that; this blog is about anything and everything related to web hosting and/or computer development, so it’s fair to ask where we’re going with this.

No one at Cerebras Systems has more than one head, and none of them has ever been anywhere near an entrance to the Underworld. But Cerebras does have the distinction of being the maker of the world’s largest processor. And if you’re an engineer who works in AI development then you may well think they’ve outdone themselves with their newest offering to the development world. Do a little research into AI and you’ll learn that its capacity is largely determined by how many parameters can be added to it.

20 billion is a big number, and the reason the news around the CS-2 system and WSE-2 chip is so big is that this newest offering from Cerebras is able to train AI models with up to 20 billion parameters on a single device. This is the ultimate in optimization at the software level, and that’s why it is something a top Canadian web hosting provider like those of us here at 4GoodHosting is going to take an interest in. Like most, we have a nice front-row seat for everything that major advances in computing technology promise to do for us.

So let’s use this week’s entry to have a much deeper look into these CS-2 devices – long before AI technology becomes commonplace in darn near everything, of course, but it’s good to be early rather than late.

No More Mass Partitioning

What this all promises to do is resolve one of the most frustrating problems for AI engineers: the need to partition large-scale models across thousands of GPUs just to make them fit. By removing that requirement, the CS-2 promises to drastically reduce the time it takes to develop and train new models.

Natural language processing has undeniable benefits, but the degree of functionality of an NLP model is heavily dependent on the number of parameters it can accommodate. To date, the performance of a model has correlated roughly linearly with its number of parameters. Larger models provide better results, but the development of large-scale AI products traditionally requires a large number of GPUs or accelerators, with the models spread across them.

The wheels fall off when there are too many parameters to be housed within memory, or when compute performance is incapable of handling training workloads. Compounding the problem is that the partitioning process is unique to each pairing of network and compute cluster, so each cluster has to be catered to individually, which makes the whole thing much more of a drawn-out process. That’s time many of these engineers will be displeased to be losing.
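A rough back-of-the-envelope calculation shows why memory becomes the wall. A common rule of thumb for mixed-precision training with an Adam-style optimizer is roughly 16 bytes per parameter (fp16 weights and gradients plus fp32 master weights and optimizer moments). That figure is our assumption for illustration, not a Cerebras number:

```python
def training_memory_gb(n_params, bytes_per_param=16):
    """Rough training-memory estimate: 2 B fp16 weights + 2 B fp16 grads
    + 12 B fp32 master copy and Adam moments = ~16 B per parameter."""
    return n_params * bytes_per_param / 1e9

for n in (1.3e9, 20e9):
    print(f"{n/1e9:5.1f}B params -> ~{training_memory_gb(n):,.0f} GB")
```

At 20 billion parameters this comes to roughly 320 GB, several times the memory of any single mainstream GPU, which is exactly why such models normally have to be partitioned across many devices.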

Bottleneck Buster

It is true that the most complex models consist of many more than 20 billion parameters, but the ability to train relatively large-scale AI models on a single CS-2 device may do away with these bottlenecks, majorly accelerating development for existing players and increasing access for those previously unable to participate in the space. More cooks in the kitchen is a plus in high-tech development much of the time.

The consensus seems to be that the CS-2 may well have the ability to bring large language models to the masses in a cost-efficient way, and this greater ease of access may be the stimulus needed for a new era in AI where big new steps are made and, in the long run, society benefits in many different ways.

There’s potentially more to this too, as according to Cerebras the CS-2 system may be able to accommodate even bigger models down the line, perhaps even capable of trillions of parameters. If that’s true, chaining together multiple CS-2 systems could start us down the path to AI networks that are more expansive than the human brain.

Tips for Protecting GPUs and CPUs from Heatwaves

If you’re from these parts of North America you won’t soon forget what the weather was like almost exactly a year ago. Late June of 2021 saw the West Coast hit with a heatwave that put every one before it to shame with just how incredibly hot it got. The mercury soared to well over 40 degrees in most places, and one location not too far from here recorded the hottest temperature in Canadian history. To say it was sweltering would be an understatement, and unfortunately it seems that we’re all living in a much, much warmer world than we used to.

We are not going to segue into discussing climate change and/or global warming here, but what we will say is that it is not just us humans and the animals around us that stand to be worse for wear because of extreme temperatures. It turns out they can be plenty harmful for the digital devices a lot of us rely on for work, the same ones nearly all of us use for entertainment even if we don’t spend our workday in front of monitors and a keyboard. There’s a reason most GPUs come with their own radiator and cooling fan, and we’ve all heard our desktop or notebook whirring when it needs to cool itself down.

This is a topic of interest for us here at 4GoodHosting, and we are like any reliable Canadian web hosting provider in that, just like many of you, we are a mix of both those scenarios: if it’s not the desktop or notebook being put through its paces, it’s the mobile. Such is the way with modern life for a lot of us, and anything that puts those operations in jeopardy is going to have a target on its back. There’s all sorts of buzz around malware and DDoS attacks and the like these days, but what about temperature extremes?

Components Kept Cool

Processors and graphics cards are very sensitive to heat, which is why ensuring they have the ability to stay cool when the temperature around them rises drastically is important. This isn’t a new requirement, and most will come with some form of cooling solution, with better ones the more premium the product. In high-strain environments you may also find water-cooling blocks and AIO (all-in-one) cooling systems, which use a fan and radiator to cool the water circulating over the hot surface. The thing is, these solutions depend on having cool air around them if they’re to be effective.

As it is now, the system generally only cools in relation to the ambient temperature of its surroundings, and this puts additional strain on our computers and laptops. But how hot is too hot here, exactly? This isn’t our area of expertise, but we’ve dug up that typical safe GPU temperatures are usually between 150° and 185° Fahrenheit (65° to 85° Celsius).

For CPUs the same values are around 140° to 160° Fahrenheit (60° to 70° Celsius), and anything even near the top of that range may be putting your system in jeopardy of overheating to the point of failure. That’s within your power to control, but what if the same reality exists for a CPU or GPU and there’s nothing you can do to remedy the situation?
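Checking a reading against those thresholds is simple arithmetic. A small sketch using the GPU ceiling quoted above (the 85 °C warning threshold is just the top of that quoted range, not a manufacturer specification):

```python
def f_to_c(f):
    """Convert a Fahrenheit reading to Celsius."""
    return (f - 32) * 5 / 9

def gpu_temp_status(temp_c, warn_c=85):
    """Flag a GPU temperature against the ~85 Celsius ceiling quoted above."""
    return "overheating" if temp_c > warn_c else "ok"

print(round(f_to_c(185)))            # 185 F is the top of the quoted range
print(gpu_temp_status(f_to_c(190)))  # a reading just past the ceiling
```

Many monitoring tools report in Celsius directly, so in practice you would feed the sensor value straight into the status check.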

Stay Cool Tips for Users

So here’s what you can do to safeguard your components if you’re aware that a heatwave is on its way.

  1. Clean Your Device and Make Sure Vents are Clear

The buildup of dust and fluff is likely occurring in every desktop PC much faster than the owner is aware of, and this may mean your system is already deprived of airflow. The smart move is to use a can of compressed air to gently dust away any buildup every once in a while. If you have a PC case that can be opened up by removing a panel then you should also give your case fans a blast of air to clean them, and make sure that the inside of your system is clear from any dust build-up.

  2. Overclock Less Often

Many PC enthusiasts aim to maximize performance with overclocking. This not only draws additional power, it also increases the average operating temperature and sometimes quite significantly. Unless you absolutely need your system to be overclocked at all times, consider pulling back on the days when the weather is hot and likely going to get hotter.

  3. Get Fanning

Case fans are integral to keeping your CPU and GPU cool. With some units you may have a controller that lets you bump your fan speed up to 100%, but if not you will need to control this on the system itself by heading into the BIOS. That is time consuming, so a good option is to install third-party software like SpeedFan, though be aware that some CPU fans won't be compatible with it.

A case upgrade may be in order if you have those fans maxed out through SpeedFan and the unit is still running too hot.
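For readers comfortable scripting their own cooling behaviour (where hardware and drivers permit), fan control software generally works from a fan curve: a set of (temperature, duty-cycle) points with interpolation between them. A minimal, hypothetical sketch of that idea:

```python
# Hypothetical fan-curve helper: maps a temperature (Celsius) to a fan
# duty cycle (percent) by linear interpolation between curve points.
# Actually applying the duty cycle is hardware-specific and not shown.
CURVE = [(30, 20), (50, 40), (70, 80), (80, 100)]  # (temp C, duty %)

def fan_duty(temp_c, curve=CURVE):
    """Return the fan duty-cycle percentage for a given temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the two surrounding points
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

print(fan_duty(60))  # 60.0 -- halfway between the 40% and 80% points
```

On a hot day you would effectively shift the whole curve left, so the fans ramp up sooner, which is exactly what maxing them out at 100% does in blunt form.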

  4. Replace Thermal Paste / Thermal Pads

This one may be beyond the comfort level of a lot of people, quite likely so if you've never opened up a computer before in any capacity. But if your build has a few years behind it, consider cleaning the old thermal paste off the CPU and replacing it with a fresh application. This may help bring the heat down.

Some units may not have the best thermal pads to begin with either. We'll let you look into this one further on your own, as we don't want to go on too long here. There are good tutorial videos easily found on YouTube.

Machine Learning for Google Chrome Set to Make Web Browsing Much Better


It's pretty rare to be entirely satisfied with something; there are usually little things that irk you about it even when you're fairly satisfied with the overall experience. When it comes to web browsing, most people will likely say it's the pop-ups that annoy them most, and for people with health concerns connected to optic nerve overstimulation they can be more than just a nuisance.

There's no getting around the need for revenue, so unfortunately pop-ups and the like aren't going anywhere. But the web development giants of the world have made efforts over the years to improve the experience of using their browsers. Real improvements haven't been fast to arrive, and in many ways that's because the imperfections of browsers like Edge, Firefox, and Chrome are tolerable in the big picture of everything.

So with that understood, we can assume that anything that does make news when it comes to an improved web browsing experience has to be something rather big and significant. And it is here, with Google making it clear they are now making real strides implementing machine learning into the function of their flagship Chrome browser. A browser that, as many of you will know, is far and away the browser of choice for most people around the world.

And here at 4GoodHosting we're like most good Canadian web hosting providers in that we feel similarly, and like anyone anywhere, a more enjoyable and tailored web browsing experience sounds plenty good to us. So let's dig into this today, as it's something that likely all of us will benefit from over the long term.

Specific Task Focuses

It was in a recent blog post that Google announced it is rolling out new features to Chrome via 'on device' machine learning. Improving a person's browsing experience is the goal, and it will be made possible by adding several new ML (machine learning) models, each focused on a different task. For starters, web notifications will be handled differently, and we may well be presented with a better, more adaptive tool bar too. Google states these new features will promote a 'safer, more accessible and more personalized browsing experience', and one that is better for all different types of users.

Another big plus is that having models run (and stay) on your device instead of in the cloud promotes better privacy, and maybe we can go so far as to think it may eventually keep certain people from feeling they have to go with DuckDuckGo or something similar.

But let’s stick with the indicated changes for web notifications first. What we’re going to see in upcoming versions of Chrome is an on-device machine learning structure that will examine how you interact with notifications. If and when it finds you are routinely denying permissions to certain types of notifications then it will put a stop to you receiving that type in the future. Nearly everyone will be thankful to not have to click ‘dismiss’ every time they pop up, and as a lot of us know they pop up way too often.

Good intentions, sure, but it will be nice if the browser is able to see the pattern, realize that permission is never going to be granted, and stop requesting it. And you're still able to override Google's prediction if you'd rather keep making the choice for yourself.
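Google hasn't published the model details, but the behaviour described above can be approximated with a simple heuristic: track how often a given type of prompt is granted or denied, and stop showing prompts whose historical grant rate is effectively zero. A purely hypothetical sketch, not Chrome's actual implementation:

```python
# Hypothetical illustration only -- Chrome's real on-device model is not
# public. This mimics the described behaviour: silence permission prompts
# the user has consistently denied, while allowing an explicit override.
from collections import defaultdict

class PromptSilencer:
    def __init__(self, min_samples=5, max_grant_rate=0.1):
        self.history = defaultdict(lambda: {"granted": 0, "denied": 0})
        self.min_samples = min_samples      # evidence needed before silencing
        self.max_grant_rate = max_grant_rate
        self.overrides = set()              # prompt types the user forced back on

    def record(self, prompt_type, granted):
        key = "granted" if granted else "denied"
        self.history[prompt_type][key] += 1

    def should_show(self, prompt_type):
        if prompt_type in self.overrides:
            return True
        h = self.history[prompt_type]
        total = h["granted"] + h["denied"]
        if total < self.min_samples:
            return True  # not enough evidence yet -- keep asking
        return h["granted"] / total > self.max_grant_rate

s = PromptSilencer()
for _ in range(6):
    s.record("notifications", granted=False)
print(s.should_show("notifications"))  # False -- silenced after repeated denials
```

The override set is the equivalent of the manual setting mentioned above: the prediction yields to an explicit user choice.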

Responsive to Behaviour Patterns

Another aim they've focused on is having Chrome change what the tool bar does based on your past behavior. One example would be people who like to use voice search in the morning while on public transit, or a person who is prone to sharing a lot of links. For either scenario Chrome will anticipate your needs and add either a microphone button or a 'share' icon to the tool bar to simplify the process for the user.

Customizing it manually will be possible as well, but as of now it is not known whether this functionality will appear on other platforms. We do know that any time proprietary interests aren't involved, most other developers will follow suit if the changes to the technology end up being well received.

Google has also been quick to tout the work machine learning is already doing for Chrome users. Even with the current version, when you arrive at a web page it is scanned and compared against a database of known phishing or malicious sites. In the event of a match you are given a warning, and you may already be familiar with it: a full-page, all-red block that has been standard since March of this year. Google states this reflects how Chrome is now able to detect 2.5 times more malicious sites than it could before.

Some – Not All – In Next Release

Apparently the smart silencing of notifications will be part of the next release of Chrome, but we may have to wait for certain other offerings that are part of Google's greater incorporation of machine learning into the flagship web browser. Another question people are asking is whether these improvements will be mobile only, and some are wondering if overall browser performance might decline because of them. Google has said that the adaptive toolbar is one feature that won't be in the next rollout.

This will be an interesting development to keep an eye on, and in large part because as mentioned when improvements are proven positive and well received other tech giants tend to follow suit with regards to their products. And given how much time nearly all of us spend web browsing it’s something that most will have a good level of interest in.

Power-Providing Clothing for Wearable Devices Soon to be a Thing

Most people won't have heard of power dressing, or if they have they'll assume it's a term for the style choices of people who want to make the right impression in business. We will freely admit we'd never heard of it in either context, but it seems clothing that can power devices is in fact going to join the many rather stunning technological advances made over recent years. Portable power banks in purses have been extremely helpful for people who really put certain devices through their paces during the day, but might they be replaced en masse by garments?

Well, apparently they may be and what we are talking about here is a flexible waterproof fabric that is built to convert body movement into electrical energy for powering those devices. So if you’re wondering if we’re talking about a pair of pants that powers you up the more you walk – that’s exactly what we are talking about here! It really is quite the eye-opener to dig up all the technological advancements that are occurring in the world, and what we do here each week puts us directly in line to do that.

This is the sort of news that is intriguing for any good Canadian web hosting provider, and obviously that’s because it is going to have an immediate appeal for everyone and not just those who are especially digitally and web savvy, or even a person who relies on someone like us for reliable web hosting. If motion can mean recharging, who’s not going to see the upside in that?

Press on Polymer

If what we've read about this is correct, this really is some cool technology in the making, and as the researchers in Singapore have noted it is set up to be particularly ideal for people with wearable smart devices that have more in the way of ongoing power needs and / or aren't easy to recharge regularly.

The key ingredient in this powering fabric is a polymer that, when pressed or squeezed, takes the vibrations produced by the smallest of motions and converts them into an electric charge. The material is made with a spandex base layer and reinforced with a rubber-like component. In one experiment, researchers found that tapping a 4cm piece of the material immediately generated enough electrical energy to light up 100 LED bulbs.

Again, they are referring to this as Power Dressing, and the concept of it has actually been analyzed for more than 20 years before it picked up enough steam to be where it is today and getting closer to realization. One thing that they are finding, however, is that most electricity-producing fabrics don’t hold up to long-term use. They also don’t take very well to being cleaned in washing machines, and that is a hurdle that absolutely has to be overcome.

Work in Progress

So obviously the immediate focus was getting over that roadblock: developing something that does not degrade in function after being washed and maintains the same level of electrical output over time. Consider it done, as the development team in Singapore has now done just that and is reporting their revised conductive material doesn't lose anything when washed, folded, or crumpled. What's more, they estimate it will maintain stable electrical output through five months of wear, keeping in mind no one's going to be wearing these garments every day.

The prototype is set to be woven into garments, but it may also eventually be compatible with footwear, which is where it would likely get the most bang for its buck, with charging based on intensity of motion.

It is capable of producing a charge of 2.34 watts per square meter in one of two ways: by being pressed or squashed, the same way standard piezoelectricity is created, or when it comes into contact with or generates any measure of friction against other materials, including skin or certain other fabrics that promote it. When attached to an arm, leg, hand, elbow, or even the insole of a shoe, it will be able to harness energy from a range of human movements, including running, playing sports, or roaring through the park trying to keep up with your dog.

The Coming Significance of Platform Engineering in Web Development

A cobbler is the person who repairs a shoe, although it is increasingly rare that anyone gets a pair of shoes repaired these days; most shoes aren't even made with any real possibility of reattaching a sole or anything similar. In the 21st century you could say it's more likely that someone will be putting those same energies into some type of digitally-built tool or resource, one that people will get a lot more mileage out of than an inexpensive pair of sneakers. 'Cobbling' in that sense is putting together a collection of disparate tools and then making them work as well as can be expected in web development.

That's our roundabout way of introducing platform engineering, which if you haven't heard of it is likely to be the next big thing in development: a means of bringing technologies together with more functional compatibility, for builds that come together with a) much more speed, and b) a whole lot more in the way of getting to a 'finished' product sooner. What it stands to replace are the various home-grown self-service frameworks that have, for the most part, shown themselves to be brittle, high maintenance, and often way too expensive when the entirety of everything is taken into account.

We'll get into the meat of what more may be possible with this here, but it goes without saying that here at 4GoodHosting we're like any other quality Canadian web hosting provider in seeing how this may be a very relevant technological development for the people who do the work behind the scenes that makes our digital world what it is. Interesting stuff for sure.

Engineering Made Common

More succinctly, platform engineering is the practice of building and operating a common platform made available to internal development teams to share and use in accelerating software releases. The belief is that it will bridge the gap between software and hardware, and that platform engineers will enable application developers to release more innovative software in less time and with much more efficiency.

Not only is this super relevant for platform engineering, it is potentially big for the hybrid cloud too. The aim is to have self-service be both a hallmark of DevOps maturity and a key platform engineering attribute, one proven to support developer-driven provisioning of applications and any underlying infrastructure required along with them. The value in that should be self-evident, as today application teams working in hybrid and multi-cloud environments require different workflows, tools, and skillsets across different clouds.

This means complexity, and often too much of it for things to get done right. The need is to get to a final product as soon as possible, and the challenge shines a spotlight on the growing demand for a unified approach to platform engineering and developer self-service.

Ideal for Scaling DevOps Initiatives

There are estimates that upwards of 85% of enterprises will come up short in their efforts to scale DevOps initiatives if no self-service platform approach is made available to them by the end of next year (2023). To counter that, the recommendation is that infrastructure and operations leaders begin appointing platform owners and establishing platform engineering teams. But the important point is that they build in self-service infrastructure capabilities that are in line with developer needs at the same time.

Similar estimates suggest that by 2025, 75% of organisations with platform teams will provide self-service developer portals as a means of improving developer experience and giving product innovation a real boost, given how much more quickly results are seen and how developers gain a better understanding of where the current development path is likely to take them.

Compelling Benefits

Accelerated software development is the big carrot at the end of the stick leading investment into platform engineering. Pursuing it with such enthusiasm ensures application development teams bring that productivity to all aspects of the software delivery cycle. Ongoing examination of the entire software development life cycle, from source code to test and development and on into provisioning and production operations, is the best way to get the back half of the development equation right.

From there, what it promotes is much better processes and platforms that enable application developers to rapidly provision and release software. Teams will probably be using infrastructure automation and configuration management tools like Ansible, Chef, or Puppet. These tools are conducive to continuous automation that extends processes used in software development to infrastructure engineering. Look for infrastructure-as-code (IaC) tools such as Terraform to work well for codifying the tasks required to provision new resources, and to continue playing a key role in the growth of platform engineering and platform engineering teams.
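The common thread in tools like Ansible, Chef, Puppet, and Terraform is declarative, desired-state reconciliation: you describe what should exist, and the tool computes the actions needed to get there. A toy illustration of that core loop (our own sketch, not any real tool's API):

```python
# Toy desired-state reconciliation -- the core idea behind IaC tools.
# Resources are modeled as simple name -> config dicts; real tools add
# providers, dependency graphs, state locking, drift detection, etc.
def plan(desired, actual):
    """Return the actions needed to make 'actual' match 'desired'."""
    actions = []
    for name, config in desired.items():
        if name not in actual:
            actions.append(("create", name, config))
        elif actual[name] != config:
            actions.append(("update", name, config))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"web": {"size": "small"}, "db": {"size": "large"}}
actual = {"web": {"size": "medium"}}
print(plan(desired, actual))
# [('update', 'web', {'size': 'small'}), ('create', 'db', {'size': 'large'})]
```

Running the same plan twice against an up-to-date environment produces no actions, which is the idempotency property that makes these tools safe for self-service provisioning.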

Most notable at this time is Morpheus Data, a platform engineered for platform engineers and set up explicitly to allow self-service provisioning of application services into any private or public cloud. When combined with Dell VxRail hyperconverged infrastructure, you will see digital transformations sped up impressively while also improving your cloud visibility.