Post by account_disabled on Dec 20, 2023 1:33:40 GMT -5
For decades, this expectation has held true, but now the computer industry is putting Moore's Law to the test, and may spell its demise in the near future. (FYI, NVIDIA's CEO declared Moore's Law dead earlier this year.) How does this relate to Google's cloud business and network infrastructure? At the Open Networking Foundation Connect event in December 2018, Amin Vahdat, Google vice president and Technical Lead for Networking, acknowledged the end of Moore's Law and described the company's conundrum: “Our computing needs are continuing to grow at an astonishing rate.
We will need more tightly coupled accelerators and computation. The structure of the network will play a key role in connecting these two elements.” One way for cloud providers to keep up with the growing demand for computing power is clustering. Clustering, put simply, means joining multiple computers to work on a single problem, running the processes of a single application. Obviously, a precondition for benefiting from such a setup is low latency and serious network capacity. When Google began designing its own hardware in 2004, networking hardware vendors still thought in terms of boxes: routers and switches had to be managed individually, via the command line.
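To make the clustering idea concrete, here's a toy single-machine sketch in Python (my own illustration, not Google's actual setup): one job is split into chunks that a pool of workers processes in parallel, with thread workers standing in for the separate machines of a real cluster.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One worker's share of the job: sum the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

def clustered_sum(n, workers=4):
    """Split sum(range(n)) across a pool of workers and combine the results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    # In a real cluster each chunk would go to a different machine, and
    # every partial result would cross the network -- which is why low
    # latency and high capacity are preconditions for this to pay off.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(clustered_sum(1_000_000))  # same answer as sum(range(1_000_000))
```

The combine step here is a trivial sum, but the same split/compute/merge shape underlies real cluster workloads, where the merge traffic is exactly what stresses the network fabric.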
Until then, Google had been purchasing groups of switches from vendors like Cisco, spending a fortune per switch, but the equipment still couldn't keep up with the growth. Google needed a different network architecture. Demand on Google's infrastructure was growing exponentially (a 2015 Google research paper states that its network capacity had grown 100-fold in ten years), and the cost of buying existing hardware at that pace also pushed the company to create its own solutions. Google began building custom switches from silicon chips, adopting a different, more modular network topology.
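As a quick sanity check on that 100-fold-in-ten-years figure (the numbers below are just arithmetic on the stated claim, not from the paper itself):

```python
import math

# 100x growth over 10 years: what annual rate does that imply,
# and how does the doubling time compare with Moore's Law's
# roughly-every-two-years cadence?
growth_factor = 100
years = 10

annual = growth_factor ** (1 / years)            # ~1.585x, i.e. ~58% per year
doubling = math.log(2) / math.log(annual)        # ~1.5 years per doubling

print(f"annual growth:  {annual:.3f}x")
print(f"doubling time:  {doubling:.2f} years")
```

In other words, a network capacity that doubles roughly every 18 months — faster than the transistor-density curve Moore's Law describes, which is exactly the mismatch Vahdat was pointing at.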