Google Says Its Machine Learning Chip Leaves CPUs, GPUs in the Dust

Without its custom-designed chip for machine learning, Google might have had to build many more data centers to support the intensive computing demands of services such as Image Search, Google Photos and more.

Yesterday Google revealed new performance details about its Tensor Processing Unit (TPU) during a National Academy of Engineering presentation at the Computer History Museum in Mountain View, Calif. According to Google, tests show the TPU outperforms the CPUs and GPUs in its data centers, delivering faster and more energy-efficient computing for demanding machine learning workloads.

As it applied machine learning capabilities to more of its products and applications over the past several years, Google said it realized it needed to supercharge its hardware as well as its software. It launched a "stealthy" project to develop a custom accelerator, and last May revealed that it had been successfully running Tensor Processing Units in its data centers for more than a year.

Faster Chip or Twice as Many Data Centers

"The need for TPUs really emerged about six years ago, when we started using computationally expensive deep learning models in more and more places throughout our products," Norm Jouppi, a distinguished hardware engineer at Google, wrote yesterday on Google's Cloud Platform Blog. "The computational expense of using these models had us worried."

According to Jouppi, the company calculated that if Google users employed voice search for just three minutes a day, the resulting demand would have overwhelmed its existing data center processors running deep neural nets for speech recognition. Keeping up would have required Google to double its number of data centers, he said.

Google currently operates 15 major data centers around the globe, with eight located in the U.S. and the rest scattered across Europe, Asia and South America.

The company launched its high-priority TPU development project with the goal of boosting computation...
